
    ABC: A Simple Explicit Congestion Controller for Wireless Networks

    We propose Accel-Brake Control (ABC), a simple and deployable explicit congestion control protocol for network paths with time-varying wireless links. ABC routers mark each packet with an "accelerate" or "brake", which causes senders to slightly increase or decrease their congestion windows. Routers use this feedback to quickly guide senders towards a desired target rate. ABC requires no changes to header formats or user devices, but achieves better performance than XCP. ABC is also incrementally deployable; it operates correctly when the bottleneck is a non-ABC router, and can coexist with non-ABC traffic sharing the same bottleneck link. We evaluate ABC using a Wi-Fi implementation and trace-driven emulation of cellular links. ABC achieves 30-40% higher throughput than Cubic+Codel for similar delays, and 2.2X lower delays than BBR on a Wi-Fi path. On cellular network paths, ABC achieves 50% higher throughput than Cubic+Codel.
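
    The sender-side reaction described above (window nudged up on "accelerate", down on "brake") can be pictured as a per-ACK update. The sketch below is a hypothetical illustration only; the class name, one-packet step size, and mark encoding are assumptions, not taken from the ABC paper.

        # Hypothetical sketch of an ABC-style sender reaction (names and constants assumed).
        class AbcSender:
            def __init__(self, init_cwnd=10):
                self.cwnd = init_cwnd  # congestion window, in packets

            def on_ack(self, mark):
                # Each ACK echoes the router's one-bit mark for the acknowledged packet.
                if mark == "accelerate":
                    self.cwnd += 1                      # slight increase per accelerate
                elif mark == "brake":
                    self.cwnd = max(1, self.cwnd - 1)   # slight decrease per brake
                return self.cwnd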

    Is the Web ready for HTTP/2 Server Push?

    HTTP/2 supersedes HTTP/1.1 to tackle the performance challenges of the modern Web. A highly anticipated feature is Server Push, enabling servers to send data without explicit client requests, thus potentially saving time. Although guidelines on how to use Server Push have emerged, measurements have shown that it can easily be used in a suboptimal way and hurt instead of improve performance. We thus tackle the question of whether the current Web can make better use of Server Push. First, we enable real-world websites to be replayed in a testbed to study the effects of different Server Push strategies. Using this, we next revisit proposed guidelines to grasp their performance impact. Finally, based on our results, we propose a novel strategy using an alternative server scheduler that interleaves resources. This improves the visual progress for some websites, with minor modifications to the deployment. Still, our results highlight the limits of Server Push: a deep understanding of web engineering is required to make optimal use of it, and not every site will benefit.
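
    As a rough illustration of what "interleaving resources" in the server scheduler could mean, a hypothetical scheduler might push fixed-size chunks of several resources in round-robin order instead of sending each resource to completion. Everything below (function name, chunk size, resource names) is assumed for illustration and is not the paper's scheduler.

        # Hypothetical round-robin interleaving of pushed resources (illustrative only).
        from collections import deque

        def interleave(resources, chunk_size=16 * 1024):
            """Yield (name, chunk) pairs, rotating across resources after every chunk."""
            queue = deque((name, memoryview(body)) for name, body in resources)
            while queue:
                name, body = queue.popleft()
                yield name, bytes(body[:chunk_size])
                if len(body) > chunk_size:
                    queue.append((name, body[chunk_size:]))

        # Example: interleave a stylesheet and a script so neither blocks the other.
        for name, chunk in interleave([("style.css", b"a" * 40000), ("app.js", b"b" * 40000)]):
            pass  # hand each chunk to the HTTP/2 framing layer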

    Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images

    Electro-optic (EO) image sensors exhibit high resolution and low noise in daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperature. Therefore, we propose a novel framework of IR image enhancement based on information (e.g., edges) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework superimposes/blends the edges of the EO image onto the corresponding transformed IR image to improve its resolution. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than superimposed images. Additionally, following the same steps, the simulation results show a blended IR image of better quality when only the original IR image is available.
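
    The four steps can be sketched as a small image pipeline. The snippet below is a minimal sketch under stated assumptions: the IR and EO inputs are pre-registered, equally sized grayscale arrays in [0, 1], and a simple unsharp-masking step stands in for the paper's inverse filter (the PSF/OTF model of Hardie et al. is not reproduced here).

        # Minimal sketch of the EO-edge-onto-IR blending pipeline (assumptions noted above).
        import numpy as np
        from scipy import ndimage

        def enhance_ir(ir, eo, alpha=0.7):
            # (1) IR transformation: unsharp masking as a stand-in for the inverse filter.
            blurred = ndimage.gaussian_filter(ir, sigma=2.0)
            ir_sharp = np.clip(ir + (ir - blurred), 0.0, 1.0)
            # (2) EO edge detection (Sobel gradient magnitude as a simple detector).
            gx, gy = ndimage.sobel(eo, axis=0), ndimage.sobel(eo, axis=1)
            edges = np.hypot(gx, gy)
            edges /= edges.max() + 1e-8
            # (3) Registration is assumed already done: ir and eo are aligned, same shape.
            # (4) Blend the EO edges into the transformed IR image.
            return np.clip(alpha * ir_sharp + (1.0 - alpha) * edges, 0.0, 1.0)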

    MadEye: Boosting Live Video Analytics Accuracy with Adaptive Camera Configurations

    Camera orientations (i.e., rotation and zoom) govern the content that a camera captures in a given scene, which in turn heavily influences the accuracy of live video analytics pipelines. However, existing analytics approaches leave this crucial adaptation knob untouched, instead opting only to alter the way that captured images from fixed orientations are encoded, streamed, and analyzed. We present MadEye, a camera-server system that automatically and continually adapts orientations to maximize accuracy for the workload and resource constraints at hand. To realize this using commodity pan-tilt-zoom (PTZ) cameras, MadEye embeds (1) a search algorithm that rapidly explores the massive space of orientations to identify a fruitful subset at any point in time, and (2) a novel knowledge distillation strategy to efficiently (with only camera resources) select the ones that maximize workload accuracy. Experiments on diverse workloads show that MadEye boosts accuracy by 2.9-25.7% for the same resource usage, or achieves the same accuracy with 2-3.7x lower resource costs.
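
    The first component, the orientation search, can be thought of as picking the top-k orientations under a per-frame budget, scored by an inexpensive on-camera estimate of workload accuracy. The sketch below is only a hypothetical rendering of that idea; the function names, tuple format, and greedy policy are assumptions, not MadEye's actual algorithm.

        # Hypothetical greedy selection over pan-tilt-zoom orientations (illustrative only).
        def pick_orientations(orientations, score_fn, budget):
            """Return the orientations the camera should visit this frame.

            orientations: iterable of (pan, tilt, zoom) tuples
            score_fn:     cheap on-camera estimate of workload accuracy for an orientation
            budget:       number of orientations the camera can capture per frame
            """
            return sorted(orientations, key=score_fn, reverse=True)[:budget]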

    Leveraging Program Analysis to Reduce User-Perceived Latency in Mobile Applications

    Reducing network latency in mobile applications is an effective way of improving the mobile user experience and has tangible economic benefits. This paper presents PALOMA, a novel client-centric technique for reducing network latency by prefetching HTTP requests in Android apps. Our work leverages string analysis and callback control-flow analysis to automatically instrument apps using PALOMA's rigorous formulation of scenarios that address "what" and "when" to prefetch. PALOMA has been shown to incur significant runtime savings (several hundred milliseconds per prefetchable HTTP request), both on a reusable evaluation benchmark we have developed and on real applications.
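
    Conceptually, this kind of prefetching rewrites a callback so that an HTTP request whose URL is already known is issued early, and the original use point simply waits on the response. The snippet below is a hypothetical Python analogue of that rewrite (PALOMA itself instruments Android apps; the callback names here are invented for illustration).

        # Hypothetical illustration of the prefetch-then-use pattern (not PALOMA's code).
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        pool = ThreadPoolExecutor(max_workers=4)

        def on_screen_shown(url):
            # "Trigger point": the URL is fully known here, before the user taps anything.
            return pool.submit(lambda: urlopen(url).read())

        def on_button_click(prefetched):
            # "Fetch point": the original blocking request becomes a wait on the result.
            return prefetched.result()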

    Mahimahi: A Lightweight Toolkit for Reproducible Web Measurement

    This demo presents a measurement toolkit, Mahimahi, that records websites and replays them under emulated network conditions. Mahimahi is structured as a set of arbitrarily composable UNIX shells. It includes two shells to record and replay Web pages, RecordShell and ReplayShell, as well as two shells for network emulation, DelayShell and LinkShell. In addition, Mahimahi includes a corpus of recorded websites along with benchmark results and link traces (https://github.com/ravinet/sites). Mahimahi improves on prior record-and-replay frameworks in three ways. First, it preserves the multi-origin nature of Web pages, present in approximately 98% of the Alexa U.S. Top 500, when replaying. Second, Mahimahi isolates its own network traffic, allowing multiple instances to run concurrently with no impact on the host machine and collected measurements. Finally, Mahimahi is not inherently tied to browsers and can be used to evaluate many different applications. A demo of Mahimahi recording and replaying a Web page over an emulated link can be found at http://youtu.be/vytwDKBA-8s. The source code and instructions to use Mahimahi are available at http://mahimahi.mit.edu/
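
    Because the shells compose by nesting, an experiment is just a chain of commands. The lines below sketch how such a nested run might be driven from Python, assuming the Mahimahi binaries (mm-webrecord, mm-delay, mm-link, mm-webreplay) are installed and that the trace files and recorded-site directory referenced here exist.

        # Sketch of composing Mahimahi shells from Python (paths and traces are placeholders).
        import subprocess

        # Record a page load into a local directory.
        subprocess.run(["mm-webrecord", "recorded_site/",
                        "wget", "-p", "http://example.com/"])

        # Replay it behind an emulated 50 ms delay and a trace-driven link.
        subprocess.run(["mm-delay", "50",
                        "mm-link", "uplink.trace", "downlink.trace",
                        "mm-webreplay", "recorded_site/",
                        "wget", "-p", "http://example.com/"])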

    Understanding and improving Web page load times on modern networks

    Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015. Includes bibliographical references (pages 77-80).

    This thesis first presents a measurement toolkit, Mahimahi, that records websites and replays them under emulated network conditions. Mahimahi improves on prior record-and-replay frameworks by emulating the multi-origin nature of Web pages, isolating its network traffic, and enabling evaluations of a larger set of target applications beyond browsers. Using Mahimahi, we perform a case study comparing current multiplexing protocols, HTTP/1.1 and SPDY, and a protocol in development, QUIC, to a hypothetical optimal protocol. We find that all three protocols are significantly suboptimal and that their gaps from the optimal only increase with higher link speeds and RTTs. The reason for these trends is the same for each protocol: inherent source-level dependencies between objects on a Web page and browser limits on the number of parallel flows lead to serialized HTTP requests and prevent links from being fully occupied.

    To mitigate the effect of these dependencies, we built Cumulus, a user-deployable combination of a content-distribution network and a cloud browser that improves page load times when the user is at a significant delay from a Web page's servers. Cumulus contains a "Mini-CDN" (a transparent proxy running on the user's machine) and a "Puppet" (a headless browser run by the user on a well-connected public cloud). When the user loads a Web page, the Mini-CDN forwards the user's request to the Puppet, which loads the entire page and pushes all of the page's objects to the Mini-CDN, which caches them locally. Cumulus benefits from the finding that dependency resolution, the process of learning which objects make up a Web page, accounts for a considerable amount of user-perceived wait time. By moving this task to the Puppet, Cumulus can accelerate page loads without modifying existing Web browsers or servers. We find that on cellular, in-flight Wi-Fi, and transcontinental networks, Cumulus accelerated the page loads of Google's Chrome browser by 1.13-2.36×. Performance was 1.19-2.13× faster than Opera Turbo, and 0.99-1.66× faster than Chrome with Google's Data Compression Proxy.

    by Ravi Arun Netravali. S.M.
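
    The Mini-CDN/Puppet split described above can be pictured as a small caching proxy that, on a miss, asks the remote Puppet for the full set of objects the page needs. The sketch below is an assumed, simplified rendering of that flow; the class and method names are invented for illustration and are not from the thesis.

        # Hypothetical sketch of the Cumulus Mini-CDN cache flow (all names assumed).
        class MiniCdn:
            def __init__(self, puppet):
                self.puppet = puppet   # remote headless browser on a well-connected cloud
                self.cache = {}        # object URL -> response bytes

            def fetch(self, url):
                if url not in self.cache:
                    # The Puppet loads the whole page and pushes back every object it
                    # fetched, so subsequent requests from the local browser hit the cache.
                    for obj_url, body in self.puppet.load_page(url):
                        self.cache[obj_url] = body
                return self.cache[url]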

    Improving web applications with fine-grained data flows

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Includes bibliographical references (pages 131-144).

    Web applications have significantly increased in complexity over the past several decades to support the wide range of services and performance requirements that users have come to expect. On the client side, browsers are multi-process systems that can handle numerous content formats, rich interactivity, and asynchronous I/O patterns. On the server side, applications are distributed across many machines and employ multi-tier architectures to implement application logic, caching, and persistent storage. As a result, web applications have become complex distributed systems that are difficult to understand, debug, and optimize. This dissertation presents fine-grained data flows as a new mechanism for understanding and optimizing complex web applications. Fine-grained data flows comprise the set of low-level reads and writes made to distributed application state during execution. We explain how fine-grained data flows can be tracked efficiently in production systems.

    We then present four concrete systems that illustrate how fine-grained data flows enable powerful performance optimizations and debugging primitives. Polaris dynamically reorders client HTTP requests during a page load to maximally overlap network round trips without violating data flow dependencies, reducing page load times by 34% (1.3 seconds). Prophecy uses data flow logs to create a snapshot of a mobile page's post-load state, which clients can process to elide intermediate computations, reducing bandwidth consumption by 21%, energy usage by 36%, and load times by 53% (2.8 seconds). Vesper is the first system to accurately and automatically measure page time-to-interactivity, without using heuristics or developer annotations. Vesper determines a page's interactive state by firing event handlers and analyzing the resulting data flows. Vesper-guided optimizations improve time-to-interactivity by 32%, generating more satisfaction in user studies than systems targeting past metrics. Cascade is the first replay debugger to support distributed, fine-grained provenance tracking. Cascade also enables speculative bug fix analysis, i.e., replaying a program to a point, changing program state, and resuming replay, using data flow tracking and causal analysis to evaluate potential bug fixes.

    by Ravi Arun Netravali. Ph.D.
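
    Polaris-style reordering amounts to scheduling fetches in an order that respects the observed data-flow dependency graph while keeping the network busy with independent objects. The sketch below is a minimal topological-batching illustration under an assumed graph format; it is not Polaris's implementation.

        # Minimal sketch: fetch objects in dependency order, batching independent ones.
        def schedule(deps):
            """deps maps each object to the set of objects it depends on."""
            remaining = {obj: set(d) for obj, d in deps.items()}
            batches = []
            while remaining:
                # Objects whose dependencies are satisfied can be fetched in parallel.
                ready = [obj for obj, d in remaining.items() if not d]
                if not ready:
                    raise ValueError("cyclic dependency")
                batches.append(ready)
                for obj in ready:
                    del remaining[obj]
                for d in remaining.values():
                    d.difference_update(ready)
            return batches

        # Example: the HTML yields a script and a stylesheet; the script yields an image.
        print(schedule({"index.html": set(), "app.js": {"index.html"},
                        "style.css": {"index.html"}, "img.png": {"app.js"}}))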